Distributed cache for graph data
Patent abstract:
DISTRIBUTED CACHE FOR GRAPH DATA. A distributed caching system for storing and serving information modeled as a graph that includes nodes and edges that define associations or relationships between the nodes that the edges connect in the graph. Publication number: BR112013016900A2 Application number: R112013016900-1 Filing date: 2011-11-30 Publication date: 2020-10-27 Inventor: Venkateshwaran Venkataramani Applicant: Facebook, Inc.; Primary IPC class:
Patent description:
“DISTRIBUTED CACHE FOR GRAPH DATA” Technical field This disclosure relates, in general, to storing and serving graph data and, more particularly, to storing and serving graph data with a distributed cache system. Background Computer users are able to access and share large amounts of information over various local and remote computer networks, including proprietary networks as well as public networks such as the Internet. Typically, a web browser installed on a user's computing device facilitates access to and interaction with information located on various network servers identified, for example, by associated uniform resource locators (URLs). Conventional approaches to enabling the sharing of user-generated content include various information-sharing technologies or platforms, such as social networking web sites. Such web sites may include, be linked with, or provide a platform for applications allowing users to view web pages created or customized by other users, where the visibility of and interaction with such pages by other users are governed by some characteristic set of rules. Such social networking information, and most information in general, is typically stored in relational databases. In general, a relational database is a collection of relations (frequently referred to as tables). Relational databases use a set of mathematical terms, which may use structured query language (SQL) database terminology. For example, a relation can be defined as a set of tuples that have the same attributes. A tuple usually represents an object and information about that object. A relation is usually described as a table, which is organized into rows and columns. In general, all the data referenced by an attribute are in the same domain and conform to the same constraints. The relational model specifies that the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes. Applications access data by specifying queries, which use operations to identify tuples, identify attributes and combine relations. Relations can be modified, and new tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting. Each tuple in a relation must be uniquely identifiable by some combination (one or more) of its attribute values. This combination is referred to as the primary key. In a relational database, all data are stored and accessed via relations. Relations that store data are typically implemented with, or referred to as, tables. Relational databases, as implemented in relational database management systems, have become a predominant choice for the storage of information in databases used for, for example, financial records, manufacturing and logistics information, personnel data and other applications. As computing power has increased, the inefficiencies of relational databases, which made them impractical in earlier times, have been outweighed by their ease of use for conventional applications. The three leading open source implementations are MySQL, PostgreSQL and SQLite. MySQL is a relational database management system (RDBMS) that runs as a server providing multi-user access to a number of databases. The “M” in the acronym of the popular LAMP software stack refers to MySQL. Its popularity for use with web applications is closely tied to the popularity of PHP (the “P” in LAMP).
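As a minimal illustration of the relational model described above (a sketch only; the table and column names are hypothetical and not part of this disclosure), the following Python snippet uses SQLite, one of the open source implementations mentioned, to define a relation with a primary key and query it:

```python
import sqlite3

# In-memory database; a relation ("table") whose primary key uniquely
# identifies each tuple ("row"), per the relational model described above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users ("
    "  user_id INTEGER PRIMARY KEY,"   # primary key: uniquely identifies a tuple
    "  name    TEXT NOT NULL,"
    "  city    TEXT)"
)
conn.execute("INSERT INTO users VALUES (1, 'Alice', 'Palo Alto')")
conn.execute("INSERT INTO users VALUES (2, 'Bob', 'Menlo Park')")

# Queries identify tuples and attributes; here we select by primary key.
row = conn.execute("SELECT name, city FROM users WHERE user_id = ?", (2,)).fetchone()
print(row)  # ('Bob', 'Menlo Park')
```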
Several high-traffic web sites use MySQL for data storage and for logging user data. Since communication with relational databases is often a speed bottleneck, many web sites use caching systems in front of their databases when serving queries for information. For example, Memcached is a general-purpose distributed memory caching system. It is often used to speed up dynamic, database-driven web sites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) needs to be read. The Memcached APIs provide a giant hash table distributed across multiple machines. When the table is full, subsequent insertions cause the oldest data to be evicted in least recently used (LRU) order. Applications using Memcached typically layer their requests and additions into memory before falling back to slower backing storage, such as a database. The Memcached system uses a client-server architecture. The servers maintain a key-value associative array; clients populate this array and query it. Clients use client-side libraries to contact the servers. Typically, each client knows all the servers, and the servers do not communicate with each other. If a client wants to set or read the value corresponding to a certain key, the client's library first computes a hash of the key to determine the server to use. The client then contacts that server. The server computes a second hash of the key to determine where to store or read the corresponding value. Typically, the servers keep the values in RAM; if a server runs out of RAM, it discards the oldest values. Therefore, clients must treat Memcached as a transient cache; they cannot assume that data stored in Memcached will still be there when they need it. Brief description of the drawings Figure 1 illustrates an exemplary cache system architecture according to an implementation of the invention. Figure 2 illustrates an exemplary computer system architecture. Figure 3 shows an exemplary network environment. Figure 4 shows a flowchart illustrating an exemplary method for adding a new association to a graph. Figure 5 is a schematic diagram illustrating an exemplary message flow between various components of a cache system. Figure 6 shows a flowchart illustrating an exemplary method for processing changes to graph data. Figure 7 is a schematic diagram illustrating an exemplary message flow between several components of a cache system. Description of the exemplary embodiments Particular embodiments relate to a distributed cache system for storing and serving information modeled as a graph that includes nodes and edges that define associations or relationships between the nodes that the edges connect in the graph. In particular embodiments, the graph is, or includes, a social graph, and the distributed cache system is part of a larger networking system, infrastructure or platform that enables an integrated social network environment. In the present disclosure, the social network environment can be described in terms of a social graph including social graph information. In fact, particular embodiments of the present disclosure rely on, exploit or make use of the fact that most or all of the data stored by or for the social network environment can be represented as a social graph.
Particular embodiments provide a cost-effective infrastructure that can efficiently, intelligently and successfully support the exponentially increasing number of users of a social network environment such as the one described herein. In particular embodiments, the distributed cache system and backend infrastructure described herein provide one or more of: low latency at scale, a lower cost per request, an easy-to-use framework for developers, an infrastructure that supports multiple masters, an infrastructure that provides access to stored data for clients written in languages other than the PHP hypertext preprocessor (PHP), an infrastructure that allows combined queries involving both associations (edges) and objects (nodes) of a social graph as described by way of example herein, and an infrastructure that allows different persistent data stores to be used for different types of data. In addition, particular embodiments provide one or more of: an infrastructure that enables a clean separation of the cache data access API from the cache infrastructure, persistence and replication, an infrastructure that supports a write-through / read-through cache, an infrastructure that moves computation closer to the data, an infrastructure that allows transparent migration to different storage schemes and back-ends, and an infrastructure that improves the efficiency of data object access. Additionally, as used herein, “or” may imply “and” as well as “or”; that is, “or” does not necessarily exclude “and”, unless explicitly stated or implied. Particular embodiments can operate in a wide-area network environment, such as the Internet, including multiple addressable network systems. Figure 3 illustrates an exemplary network environment in which various exemplary embodiments can operate. Network cloud 60 generally represents one or more interconnected networks through which the systems and hosts described herein can communicate. Network cloud 60 may include packet-based wide-area networks (such as the Internet), private networks, wireless networks, satellite networks, cellular networks, paging networks and the like. As shown in Figure 3, particular embodiments can operate in a network environment comprising the social networking system 20 and one or more client devices 30. Client devices 30 are operably connected to the network environment via a network service provider, a wireless carrier or any other suitable means. In an exemplary embodiment, the social networking system 20 comprises computer systems that allow users to communicate or otherwise interact with each other and access content, such as user profiles, as described herein. The social networking system 20 is an addressable network system that, in various exemplary embodiments, comprises one or more physical servers 22 and data storage 24. The one or more physical servers 22 are operably connected to the computer network 60 via, by way of example, a set of routers and/or network switches 26. In an exemplary embodiment, the functionality hosted by the one or more physical servers 22 may include web or HTTP servers and FTP servers, as well as, without limitation, web pages and applications implemented using common gateway interface (CGI) scripts, PHP hypertext preprocessor (PHP), active server pages (ASP), hypertext markup language (HTML), extensible markup language (XML), Java, JavaScript, asynchronous JavaScript and XML (AJAX), and so on. Physical servers 22 can host functionality directed to the operations of the social networking system 20.
By way of example, the social networking system 20 can host a web site that allows one or more users, at one or more client devices 30, to view and post information, as well as communicate with one another via the web site. Hereinafter, servers 22 may be referred to as server 22, although server 22 may comprise numerous servers hosting, for example, the social networking system 20, as well as other content distribution servers, data stores and databases. Data storage 24 can store content and data related to, and enabling, the operation of the social networking system as digital data objects. A data object, in particular implementations, is an item of digital information typically stored or embodied in a data file, database or record. Content objects can take many forms, including: text (for example, ASCII, SGML, HTML), images (for example, jpeg, tif and gif), graphics (vector-based or bitmap), audio, video (for example, mpeg) or other multimedia, and combinations thereof. Content object data can also include executable code objects (for example, games executable within a browser window or frame), podcasts, etc. Logically, data storage 24 corresponds to one or more of a variety of separate and integrated databases, such as relational databases and object-oriented databases, which maintain information as an integrated collection of logically related records or files stored on one or more physical systems. Structurally, data storage 24 can generally include one or more of a large class of data storage and management systems. In particular embodiments, data storage 24 can be implemented by any suitable physical system(s), including components such as one or more database servers, mass storage media, media library systems, storage area networks, data storage clouds and the like. In an exemplary embodiment, data storage 24 includes one or more servers, databases (for example, MySQL) and/or data stores. The data store 24 can include data associated with different users and/or client devices 30 of the social networking system 20. In particular embodiments, the social networking system 20 maintains a user profile for each user of the system 20. User profiles include data describing the users of a social network, which can include, for example, proper names (a person's first, middle and last name, a business name and/or the company name of a business entity, etc.), biographical, demographic and other types of descriptive information, such as work experience, educational history, hobbies or preferences, geographic location and additional descriptive data. For example, user profiles can include a user's birth date, relationship status, city of residence, and so on. System 20 can also store data describing one or more relationships between different users. The relationship information can indicate users who have similar or common work experience, group associations, hobbies or educational history. A user profile can also include privacy settings governing access to the user's information by other users. The client device 30 is generally a computer or computing device including functionality for communicating (for example, remotely) over a computer network. The client device 30 can be a desktop computer, laptop computer, personal digital assistant (PDA), in- or out-of-car navigation system, smartphone or other cellular or mobile phone, or mobile gaming device, among other suitable computing devices.
Client device 30 can run one or more client applications, such as a web browser (for example, Microsoft Windows Internet Explorer, Mozilla Firefox, Apple Safari, Google Chrome, Opera, etc.), to access and view content over a computer network. In particular implementations, the client applications allow a user of the client device 30 to enter addresses of specific network resources to be retrieved, such as resources hosted by the social networking system 20. These addresses can be uniform resource locators, or URLs. In addition, once a page or other resource has been retrieved, the client applications can provide access to other pages or records when the user “clicks” on hyperlinks to other resources. By way of example, such hyperlinks can be located within the web pages and provide an automated way for the user to enter the URL of another page and retrieve that page. Figure 1 illustrates an exemplary embodiment of a networking system, architecture or infrastructure 100 (hereinafter referred to as network system 100) that can implement the back-end functions of the social networking system 20 illustrated in Figure 3. In particular embodiments, the network system 100 allows users of the network system 100 to interact with each other via social networking services provided by the network system 100, as well as with third parties. For example, users at remote user computing devices (for example, personal computers, netbooks, multimedia devices, cellular phones (especially smartphones), etc.) can access network system 100 via web browsers or other user applications to access web sites, web pages or web applications hosted or accessible, at least in part, by network system 100 to view information, store or update information, communicate information or otherwise interact with other users, third-party web sites, web pages or web applications, or other information stored, hosted or accessible by network system 100. In particular embodiments, network system 100 maintains a graph that includes graph nodes representing users, concepts, topics and other information (data), as well as graph edges that connect or define relationships between graph nodes, as described in more detail below. Referring to Figures 1 and 5, in particular embodiments, the network system 100 includes one or more data centers 102. For example, the network system 100 may include a plurality of data centers 102 strategically located within various geographic regions to serve users located within the respective regions. In particular embodiments, each data center includes a number of client or web servers 104 (hereinafter client servers 104) that communicate information to and from users of network system 100. For example, users at remote computing devices can communicate with client servers 104 via load balancers or other suitable systems over any suitable combination of networks and service providers. Client servers 104 can query the cache system described herein in order to retrieve data to generate structured documents responsive to user requests. Each of the client servers 104 communicates with one or more groups or rings of distributed follower caches 106 (hereinafter follower cache groups 106). In the illustrated embodiment, data center 102 includes three follower cache groups 106, each of which serves a subset of client servers 104.
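For illustration only, the tiered deployment just described (data centers containing follower cache groups that serve client servers, a leader cache group, and a persistent database) might be captured in a configuration structure along the following lines; the names and counts are hypothetical and are not drawn from this disclosure:

```python
# Hypothetical sketch of the deployment topology of Figure 1 (names/counts invented).
topology = {
    "data_center_102": {
        "database_110": "mysql://db.dc102.example",          # persistent store
        "leader_cache_group_108": ["leader-1", "leader-2"],  # leader cache nodes 114
        "follower_cache_groups_106": [
            {   # each follower group serves a subset of client servers 104
                "follower_cache_nodes_112": ["follower-a1", "follower-a2"],
                "client_servers_104": ["web-001", "web-002"],
            },
            {
                "follower_cache_nodes_112": ["follower-b1", "follower-b2"],
                "client_servers_104": ["web-003", "web-004"],
            },
        ],
    },
}
```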
In particular embodiments, a follower cache group 106 and the client servers 104 that the follower cache group 106 serves are located in close physical proximity, such as within a building, room or other centralized location, which reduces costs associated with infrastructure (for example, wires or other communication lines, etc.), as well as the latency between the client servers 104 and the respective follower cache group 106. However, in some embodiments, although each follower cache group 106 and the client servers 104 that it serves may be located within a centralized location, each follower cache group 106 and its respective client servers 104 may be located in a different location than the other follower cache groups 106 and respective client servers 104 of a given data center; that is, the follower cache groups 106 (and the respective client servers 104 that the groups serve) of a given data center in a given region can be spread across various locations within the region. In particular embodiments, each data center 102 further includes a leader cache group 108 that communicates information between the follower cache groups 106 of a given data center 102 and a persistent storage database 110 of the data center 102. In particular embodiments, database 110 is a relational database. In particular embodiments, the leader cache group 108 may include a plug-in operative to interoperate with any suitable implementation of database 110. For example, database 110 can be implemented with a dynamically pluggable architecture and can use MySQL and/or any suitable relational database management system or data store, such as, for example, HAYSTACK or CASSANDRA, among others. In one implementation, the plug-in performs various translation operations, such as translating data stored in the cache layer as graph nodes and edges into queries and commands suitable for a relational database including one or more tables or flat files. In particular embodiments, the leader cache group 108 also coordinates write requests to database 110 from the follower cache groups 106 and, at times, read requests from the follower cache groups 106 for information cached in leader cache group 108 or (if not cached in leader cache group 108) stored in database 110. In particular embodiments, leader cache group 108 further coordinates the synchronization of information stored in the follower cache groups 106 of the respective data center 102. That is, in particular embodiments, the leader cache group 108 of a given data center 102 is configured to maintain cache consistency (for example, of the cached information) among the follower cache groups 106 of data center 102, to maintain cache consistency between the follower cache groups 106 and the leader cache group 108, and to persist the information cached in leader cache group 108 within database 110. In one implementation, a leader cache group 108 and a follower cache group 106 can be regarded as a cache layer between client servers 104 and database 110. In one implementation, the cache layer is a write-through / read-through cache layer, where all reads and writes go through the cache layer. In one implementation, the cache layer maintains association information and thus can handle queries for that information. Other queries are passed through to database 110 for execution.
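A minimal sketch of the write-through / read-through behavior just described, assuming a simple key-value interface for the cache tiers and the database (the class and method names are invented for illustration and are not a definitive API of this disclosure):

```python
# Hypothetical sketch: all reads and writes pass through the cache layer.
class CacheLayer:
    def __init__(self, follower_store, leader_store, database):
        self.follower = follower_store   # follower cache group 106 (dict-like)
        self.leader = leader_store       # leader cache group 108 (dict-like)
        self.database = database         # persistent database 110 (dict-like)

    def read(self, key):
        # Read-through: follower -> leader -> database, filling caches on the way back.
        if key in self.follower:
            return self.follower[key]
        if key in self.leader:
            value = self.leader[key]
        else:
            value = self.database[key]   # miss everywhere: fetch from persistent store
            self.leader[key] = value
        self.follower[key] = value
        return value

    def write(self, key, value):
        # Write-through: the follower forwards to the leader, which persists the value
        # and keeps the cache tiers consistent.
        self.follower[key] = value
        self.leader[key] = value
        self.database[key] = value
```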
Database 110 generally connotes a database system that can itself include other cache layers for handling other types of queries. Each follower cache group 106 can include a plurality of follower cache nodes 112, each of which may run on an individual computer, computing system or server. However, as described above, each of the follower cache nodes 112 of a given follower cache group 106 can be located within a centralized location. Similarly, each leader cache group 108 may include a plurality of leader cache nodes 114, each of which may run on an individual computer, computing system or server. Similar to the follower cache nodes 112 of a given follower cache group 106, each of the leader cache nodes 114 of a given leader cache group 108 can be located within a centralized location. For example, each data center 102 can include dozens, hundreds or thousands of client servers 104, and each follower cache group 106 can include dozens, hundreds or thousands of follower cache nodes 112 that serve a subset of the client servers 104. Similarly, each leader cache group 108 can include dozens, hundreds or thousands of leader cache nodes 114. In particular embodiments, each of the follower cache nodes 112 within a given follower cache group 106 can communicate only with the other follower cache nodes 112 within the particular follower cache group 106, the client servers 104 served by the particular follower cache group 106 and the leader cache nodes 114 within the leader cache group 108. In particular embodiments, the information stored by network system 100 is stored within each data center 102 both within database 110 and within each of the follower and leader cache groups 106 and 108, respectively. In particular embodiments, the information stored within each database 110 is stored relationally (for example, as rows and tables via MySQL), while the same information is stored within each of the follower cache groups 106 and the leader cache group 108 in a number of data fragments held by each of the follower and leader cache groups 106 and 108, respectively, in the form of a graph including graph nodes and associations or connections between nodes (referred to herein as graph edges). In particular embodiments, the data fragments of each of the follower cache groups 106 and the leader cache group 108 are bucketed or divided among the cache nodes 112 or 114 within the respective cache group. That is, each of the cache nodes 112 or 114 within the respective cache group stores a subset of the fragments stored by the group (and each set of fragments stored by each of the follower and leader cache groups 106 and 108, respectively, stores the same information, since the leader cache group synchronizes the fragments stored by each of the cache groups of a given data center 102 and, in some embodiments, across data centers 102). In particular embodiments, each graph node is assigned a unique identifier (ID) (hereinafter referred to as the node ID) that uniquely identifies the graph node in the graph stored by each of the follower and leader cache groups 106 and 108, respectively, and in the database 110; that is, each node ID is globally unique. In one implementation, each node ID is a 64-bit identifier. In one implementation, each fragment is allocated a segment of the node ID space.
In particular embodiments, each node ID maps (for example, arithmetically or via some mathematical function) to a unique corresponding fragment ID; that is, each fragment ID is also globally unique and refers to the same data object in each set of fragments stored by each of the follower and leader cache groups 106 and 108, respectively. In other words, all data objects are stored as graph nodes with unique node IDs, and all information stored in the graph in the data fragments of each of the follower and leader cache groups 106 and 108, respectively, is stored in those data fragments using the same corresponding unique fragment IDs. As just described, in particular embodiments, the fragment ID space (the collection of fragment IDs and associated information stored by all fragments in each cache group and replicated in all other follower cache groups 106 and the leader cache group 108) is divided among the follower or leader cache nodes 112 and 114, respectively, within the follower or leader cache groups 106 and 108, respectively. For example, each follower cache node 112 in a given follower cache group 106 can store a subset of the fragments (for example, tens, hundreds or thousands of fragments) stored by the respective follower cache group 106, and each fragment is assigned a range of node IDs for which it stores information, including information about the nodes whose respective node IDs map to the fragment IDs in the fragment ID range held by the particular fragment. Similarly, each leader cache node 114 in the leader cache group 108 can store a subset of the fragments (for example, tens, hundreds or thousands of fragments) stored by the respective leader cache group 108, and each fragment is assigned a range of node IDs for which it stores information, including information about the nodes whose respective node IDs map to the fragment IDs in the fragment ID range held by the particular fragment. However, as described above, a given fragment ID corresponds to the same data objects stored by the follower and leader cache groups 106 and 108, respectively. Because the number of follower cache nodes 112 within each follower cache group 106 and the number of leader cache nodes 114 within the leader cache group 108 can vary statically (for example, the follower cache groups 106 and the leader cache group 108 can generally include different numbers of follower cache nodes 112 and leader cache nodes 114, respectively) or dynamically (for example, cache nodes within a given cache group can be taken down for various reasons, periodically or when required for repair, upgrade or maintenance), the number of fragments stored by each of the follower cache nodes 112 and leader cache nodes 114 can vary statically or dynamically within each cache group, as well as between cache groups. Furthermore, the range of fragment IDs assigned to each fragment can also vary statically or dynamically. In particular embodiments, each of the follower cache nodes 112 and leader cache nodes 114 includes graph management software that manages the storage and serving of the cached information within the respective cache node. In particular embodiments, the graph management software running on each of the cache nodes in a given cache group can communicate to determine which fragments (and corresponding fragment IDs) are stored by each of the cache nodes within the respective cache group.
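As a sketch of the kind of arithmetic mapping contemplated above (the concrete function and the number of fragments are assumptions for illustration, not fixed by this disclosure), a 64-bit node ID might map to a fragment ID as follows:

```python
# Hypothetical sketch: derive a globally unique fragment ID from a 64-bit node ID.
NUM_FRAGMENTS = 4096  # assumed size of the fragment ID space (illustrative only)

def fragment_id_for_node(node_id: int) -> int:
    """Map a 64-bit node ID to its fragment ID arithmetically."""
    assert 0 <= node_id < 2 ** 64
    return node_id % NUM_FRAGMENTS

# Every replica (each follower group and the leader group) uses the same mapping,
# so a given fragment ID refers to the same data objects everywhere.
print(fragment_id_for_node(123_456_789))  # 3349
```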
In addition, if the cache node is a follower cache node 112, the graph management software running on the follower cache node 112 receives requests (for example, write or read requests) from client servers 104, serves the requests by retrieving, updating, deleting or storing information within the appropriate fragment on the follower cache node, and manages or facilitates communication between the follower cache node 112 and the other follower cache nodes 112 of the respective follower cache group 106, as well as communication between the follower cache node 112 and the leader cache nodes 114 of the leader cache group 108. Similarly, if the cache node is a leader cache node 114, the graph management software running on the leader cache node 114 manages communication between the leader cache node 114 and the follower cache nodes 112 of the follower cache groups 106 and the other leader cache nodes 114 of the leader cache group 108, as well as communication between the leader cache node 114 and the database 110. The graph management software running on each of the cache nodes 112 and 114 understands that it is storing and serving information in the form of a graph. In particular embodiments, the graph management software on each follower cache node 112 is also responsible for maintaining a table that it shares with the other cache nodes 112 of the respective follower cache group 106, the leader cache nodes 114 of the leader cache group 108, as well as the client servers 104 that the respective follower cache group 106 serves. This table provides a mapping of each fragment ID to the particular cache node 112 in a given follower cache group 106 that stores that fragment ID and the information associated with it. In this way, the client servers 104 served by a particular follower cache group 106 know which of the follower cache nodes 112 within the follower cache group 106 holds the fragment ID associated with the information that the client server 104 is attempting to access, add or update (for example, a client server 104 can send write or read requests to the particular follower cache node 112 that stores, or will store, the information associated with a particular fragment ID after using the mapping table to determine which of the follower cache nodes 112 is assigned and stores that fragment ID). Similarly, in particular embodiments, the graph management software on each leader cache node 114 is also responsible for maintaining a table that it shares with the other leader cache nodes 114 of the respective leader cache group 108, as well as the follower cache nodes 112 of the follower cache groups 106 that the leader cache group 108 manages. Furthermore, in this way, each follower cache node 112 in a given follower cache group 106 knows which of the other follower cache nodes 112 in the given follower cache group 106 stores which of the fragment IDs held by the respective follower cache group 106. Similarly, each leader cache node 114 in the leader cache group 108 knows which of the other leader cache nodes 114 in the leader cache group 108 stores which of the fragment IDs held by the leader cache group 108. Moreover, each follower cache node 112 in a given follower cache group 106 knows which of the leader cache nodes 114 in the leader cache group 108 stores which fragment IDs. Similarly, each leader cache node 114 in the leader cache group 108 knows which of the follower cache nodes 112 in each of the follower cache groups 106 stores which fragment IDs.
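Combining the node-ID-to-fragment mapping sketched earlier (reusing its fragment_id_for_node helper and NUM_FRAGMENTS constant) with the shared mapping table described above, a client server might route a request roughly as follows; this is a sketch only, and the table contents and helper names are hypothetical:

```python
# Hypothetical sketch: shared mapping table from fragment ID to the follower cache
# node 112 that stores the fragment (contents are invented for illustration).
follower_nodes = ["follower-a1", "follower-a2", "follower-a3"]
fragment_to_node = {
    frag: follower_nodes[frag % len(follower_nodes)]  # assign fragments round-robin
    for frag in range(NUM_FRAGMENTS)
}

def route_request(node_id: int) -> str:
    """Return the follower cache node 112 responsible for node_id's fragment."""
    return fragment_to_node[fragment_id_for_node(node_id)]

# A client server 104 sends its read or write request to route_request(node_id).
print(route_request(123_456_789))  # "follower-a2"
```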
In particular embodiments, the information about each node in the graph (and, in particular exemplary embodiments, a social graph) is stored in a respective fragment of each of the follower cache groups 106 and the leader cache group 108 based on its fragment ID. Each node in the graph, as discussed above, has a node ID. Within the fragment, the respective cache node 112 or 114 can store a node type parameter identifying the node type, as well as one or more name-value pairs (such as content (for example, text, media, or URLs to media or other resources)) and metadata (for example, a timestamp indicating when the node was created or modified). In particular embodiments, each edge in the graph (and, in particular exemplary embodiments, a social graph) is stored with each node to which the edge connects. For example, most edges are bidirectional; that is, most edges each connect two nodes in the graph. In particular embodiments, each edge is stored in the same fragment as each node that the edge connects. For example, an edge connecting node ID1 to node ID2 can be stored with the fragment ID corresponding to node ID1 (for example, fragment ID1) and with the fragment ID corresponding to node ID2 (for example, fragment ID2), which can be in different fragments, or even on different cache nodes, of a given cache group. For example, the edge can be stored with fragment ID1 in the form (node ID1, edge type, node ID2), where the edge type indicates the type of the edge. The edge can also include metadata (for example, a timestamp indicating when the edge was created or modified). The edge can likewise be cached with fragment ID2 in the form (node ID1, edge type, node ID2). For example, when a user of the social networking system 100 establishes a contact relationship with another user or a fan relationship with a concept or user, the “friend” or “fan” type edge can be stored in two fragments: a first fragment to which the user's identifier maps, and a second fragment to which the object identifier of the other user or concept maps. Network system 100, and particularly the graph management software running on the follower cache nodes 112 of the follower cache groups 106 and the leader cache nodes 114 of the leader cache group 108, supports a number of queries received from client servers 104, as well as to or from other follower or leader cache nodes 112 and 114, respectively. For example, the query object_add(node ID1, node type 1, metadata (optional), payload (optional)) causes the receiving cache node to store a new node having the node ID1 and node type 1 specified in the query in the fragment to which node ID1 corresponds. The receiving cache node also stores the metadata (for example, a timestamp) and payload (for example, name-value pairs and/or content, such as text, media or references to resources) with node ID1, if specified. As another example, the query object_update(node ID1, node type 1 (optional), metadata (optional), payload (optional)) causes the receiving cache node to update the node identified by the node ID1 specified in the query (for example, changing the node type to the node type 1 specified in the query, updating the metadata with the metadata specified in the query, or updating the stored content with the payload specified in the query) in the corresponding fragment.
As another example, the query object_delete(node ID1) causes the receiving cache node to delete the node identified by the node ID1 specified in the query. As another example, the query object_get(node ID1) causes the receiving cache node to retrieve the content stored with the node identified by the node ID1 specified in the query. Turning now to edge queries (as opposed to the node queries just described), the query assoc_add(node ID1, edge type 1, node ID2, metadata (optional)) causes the receiving cache node (which stores node ID1) to create an edge of edge type 1 between the node identified by node ID1 and the node identified by node ID2, and to store the edge with the node identified by node ID1, together with the metadata (for example, a timestamp indicating when the edge was requested), if specified. As another example, the query assoc_update(node ID1, edge type 1, node ID2, metadata (optional)) causes the receiving cache node (which stores node ID1) to update the edge between the node identified by node ID1 and the node identified by node ID2. As another example, the query assoc_delete(node ID1, edge type 1 (optional), node ID2) causes the receiving cache node (which stores node ID1) to delete the edge between the node identified by node ID1 and the node identified by node ID2. As another example, the query assoc_get(node ID1, edge type 1, sort key (optional), start (optional), limit (optional)) causes the receiving cache node (which stores node ID1) to return the node IDs of the nodes connected to the node identified by node ID1 by edges of edge type 1. Additionally, if specified, the sort key acts as a filter; for example, if the sort key specifies a timestamp, the receiving cache node (which stores node ID1) returns the node IDs of the nodes connected to the node identified by node ID1 by edges of edge type 1 that were created between the time value specified by the start parameter and the time value specified by the limit parameter. As another example, the query assoc_exists(node ID1, edge type 1, list of other node IDs, sort key (optional), start (optional), limit (optional)) causes the receiving cache node (which stores node ID1) to return the node IDs of the nodes in the list of other node IDs that are connected to the node identified by node ID1 by edges of edge type 1. In addition, the queries described above can be sent, as described, to the leader cache nodes 114 and used to update them. In one implementation, the cache layer implemented by the follower and leader cache groups 106 and 108 keeps association data in one or more caches in a manner that supports high query rates for one or more types of query. In some implementations, the invention facilitates efficient intersection, association and filtering queries directed at the associations between nodes in the graph. For example, in one implementation, the cache layer caches information in a manner optimized to handle point lookup, range and count queries for a variety of associations between nodes. For example, when building a page, a client server 104 can issue a query for all friends of a given user. The client server 104 can issue an assoc_get query identifying the user and the “friend” edge type.
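By way of illustration only, the node and edge queries enumerated above might be exposed to client servers 104 through an interface along the following lines (the signatures and the in-memory backing structures are a sketch inferred from the descriptions above, not a definitive API of this disclosure):

```python
import time

# Hypothetical sketch of the graph cache query interface described above.
class GraphCacheClient:
    def __init__(self):
        self.nodes = {}   # node_id -> (node_type, metadata, payload)
        self.assocs = {}  # (node_id1, edge_type) -> list of (node_id2, metadata)

    def object_add(self, node_id, node_type, metadata=None, payload=None):
        self.nodes[node_id] = (node_type, metadata, payload)

    def assoc_add(self, id1, edge_type, id2, metadata=None):
        # The edge is stored with the fragment of id1; a mirrored entry would also be
        # stored with the fragment of id2, as described above.
        self.assocs.setdefault((id1, edge_type), []).append((id2, metadata))

    def assoc_get(self, id1, edge_type, sort_key=None, start=None, limit=None):
        entries = self.assocs.get((id1, edge_type), [])
        if sort_key == "time" and start is not None and limit is not None:
            # Timestamp filter: only edges created between start and limit.
            entries = [e for e in entries
                       if e[1] and start <= e[1].get("time", 0) <= limit]
        return [id2 for id2, _ in entries]

# Example: record a friendship and fetch a user's friends for page rendering.
cache = GraphCacheClient()
cache.object_add(1001, "user", payload={"name": "Alice"})
cache.object_add(1002, "user", payload={"name": "Bob"})
cache.assoc_add(1001, "friend", 1002, metadata={"time": int(time.time())})
print(cache.assoc_get(1001, "friend"))  # [1002]
```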
To facilitate query handling, a cache node in the cache layer can store associations of a given type (such as “friends”, “fans”, “members”, “like”, etc.) between a first node (for example, a node corresponding to a user) and the nodes corresponding to the user's contacts or friends. In addition, to build another part of the page, a client server 104 can issue a query for the last N wall posts on the profile by issuing an assoc_get query identifying the user or user profile, the “wall post” edge type and a limit value. Comments on a particular wall post can be retrieved in a similar manner. In one implementation, the cache layer implemented by the follower cache groups 106 and the leader cache groups maintains a set of in-memory structures for associations between nodes (id1, id2) in the graph that facilitate fast lookup and sustain high query rates. For example, for each association set (id1, type) (the set of all associations that originate at id1 and have a given type), the cache layer maintains two in-memory indexes. As discussed above, these association sets are maintained by the cache nodes in each group based on the fragment on which id1 is located. In addition, given the structure discussed below, a given association between two nodes can be stored in two association sets, each keyed to one of the respective nodes of the association. A first index is based on a time attribute (for example, timestamps) and supports range queries. A second index, keyed by id2, does not support range queries, but offers better insertion and lookup time complexity. In one implementation, the first index is an ordered dynamic array of association entries stored in a circular buffer. Each entry in the circular buffer describes or corresponds to an association and contains the following fields: a) $flags (1 byte), indicating the visibility of an association; b) $id2 (8 bytes); c) $time (4 bytes); d) $data (8 bytes) ($data is a fixed-size 8-byte field; when more than 8 bytes are required for $data, it becomes a pointer to another piece of memory holding the full value; $data is optional for a given association type); and e) $link (8 bytes), linking to the previous and next entries in the same id2 index bucket (see below). In one implementation, the array is ordered by ascending $time attribute. The number of entries in the index is capped (for example, at 10,000) and configurable per association type. When the cap is reached, the array wraps around cyclically. Because the array is sorted by $time, most new entries will be appended at the end without displacing any of the existing elements. In one implementation, the primary index can be stored under a single cache memory key that can be looked up by name (“assoc:<id1>:<type>”) using a global in-memory hash table. The array can be prefaced with a header containing the following fields: a) count (4 bytes): the count of visible associations in the association set (id1, type) (as persistently stored, not just the entries cached in the index); b) data start (4 bytes): the byte offset of the start of the array (highest-ranking element) in the circular buffer; c) data end (4 bytes): the byte offset of the end of the array (lowest-ranking element) in the circular buffer; and d) id2 index pointer (8 bytes): a pointer to a block containing an id2 hash table.
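A minimal sketch of the entry and header layout just described, using Python's struct module to mirror the stated field widths (only the field sizes are taken from the text above; the field ordering and packing shown here are assumptions for illustration):

```python
import struct

# Hypothetical packing of one circular-buffer entry:
# $flags (1 byte), $id2 (8 bytes), $time (4 bytes), $data (8 bytes), $link (8 bytes).
ENTRY_FORMAT = "<BQIQQ"          # little-endian; ordering assumed for illustration
ENTRY_SIZE = struct.calcsize(ENTRY_FORMAT)    # 29 bytes with this packing

# Hypothetical packing of the primary-index header:
# count (4 bytes), data start (4 bytes), data end (4 bytes), id2 index pointer (8 bytes).
HEADER_FORMAT = "<IIIQ"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 20 bytes with this packing

def pack_entry(flags, id2, time_s, data, link):
    """Serialize one association entry of the time-ordered circular buffer."""
    return struct.pack(ENTRY_FORMAT, flags, id2, time_s, data, link)

entry = pack_entry(flags=1, id2=1002, time_s=1_700_000_000, data=0, link=0)
print(ENTRY_SIZE, HEADER_SIZE, len(entry))  # 29 20 29
```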
The second index ($id2) is implemented, in one embodiment, as a hash table and supports fast insertions and lookups for a given association ($id1, $type, $id2). The hash table itself, in one implementation, can be stored in a separate block allocated with the cache's memory allocator. The table is an array of offsets into the primary index, each identifying the first element in the corresponding hash bucket. The elements within a bucket are linked together through their $link fields. Storing the hash table in a separate block allows implementers to resize the table and the primary index independently, thereby reducing the amount of memory copied as the association set grows. Linking the association entries into buckets in place also improves memory efficiency. The hash table (and its bucket lists) may need to be rebuilt when entries marked hidden or deleted are purged from the index, but this can be done rarely. Thus, when a new association of the same <type> is added, a cache node 112, 114 adds the newly associated object to the hash table and the circular buffer, removing the oldest entry from the circular buffer. As discussed above, the value of the <sort key> can be used to sort the corresponding entries based on an attribute, such as timestamps. In addition, a <limit> value limits the number of results returned to the first N values, where N = <limit>. This arrangement makes it possible to serve queries over the associations between nodes at a very high query rate. For example, a first query may request the display of a set of friends in a section of a web page. A cache node can respond quickly to an assoc_get query (id1, type, sort key, limit) by locating the association set corresponding to id1, accessing the primary index and retrieving the first N id2 entries (where N = limit) from the circular buffer. In addition, the secondary-index hash table facilitates point lookups, and the count value maintained by the cache layer facilitates quick responses to count queries over a given association set (id1, type). Some general examples of data storage and serving will now be described (more specific examples related to particular exemplary implementations of a social graph will be described later, after those implementations of the social graph are described). For example, when a client server 104 receives a request for a web page, such as from a user of network system 100 or from another server, component, application or process of network system 100 (for example, in response to a user request), the client server 104 may need to issue one or more queries in order to generate the requested web page. In addition, as the user interacts with network system 100, client server 104 can receive requests that establish or modify object nodes and/or associations between object nodes. In some cases, the request received by a client server 104 includes the node ID representing the user on whose behalf the request to the client server 104 was made. The request may also, or alternatively, include one or more other node IDs corresponding to objects that the user may wish to view, update, delete, or connect or associate (with an edge). For example, a request can be a read request to access information associated with the object or objects the user wants to view (for example, one or more objects needed to serve a web page). For example, the read request can be a request for the content stored with a particular node.
For example, a wall post on a user's profile can be represented as a node with a “wall post” edge type. Comments on the wall post can also be represented as nodes in the graph with “comment” edge-type associations to the wall post. In such an example, in particular embodiments, the client server 104 determines the fragment ID corresponding to the node ID of the object (node) that includes the content or other requested information, uses the mapping table to determine which of the follower cache nodes 112 (in the follower cache group 106 serving the client server 104) stores that fragment ID, and transmits a query including the fragment ID to the particular follower cache node 112 storing the information associated with and stored with the fragment ID. The particular cache node 112 then retrieves the requested information (if cached within the corresponding fragment) and transmits the information to the requesting client server 104, which can then serve the information to the requesting user (for example, in the form of HTML or another structured document that can be rendered by the web browser or other document-rendering application running on the user's computing device). If the requested information is not stored/cached within the follower cache node 112, the follower cache node 112 can then determine, using the mapping table, which of the leader cache nodes 114 stores the fragment holding that fragment ID and sends the query to the particular leader cache node 114 that stores the fragment ID. If the requested information is cached within the particular leader cache node 114, the leader cache node 114 can then retrieve the requested information and send it to the follower cache node 112, which then updates the particular fragment on the follower cache node 112 to store the requested information with the fragment ID and proceeds to serve the query, as just described, to the client server 104, which can then serve the information to the requesting user. If the requested information is not cached within the leader cache node 114, the leader cache node 114 can then translate the query into the language of database 110 and transmit the new query to database 110, which then retrieves the requested information and transmits it to the particular leader cache node 114. The leader cache node 114 can then translate the retrieved information back into the graph language understood by the graph management software, update the particular fragment on the leader cache node 114 to store the requested information with the fragment ID, and transmit the retrieved information to the particular follower cache node 112, which then updates the particular fragment on the follower cache node 112 to store the requested information with the fragment ID and proceeds to serve the query, as just described, to the client server 104, which can then serve the information to the requesting user. As another example, the user's request can be a write request to update existing information, to store additional information for a node, or to create or modify an edge between two nodes. In the first case, if the information to be stored is for a node that does not yet exist, the client server 104 receiving the user's request transmits a request for a node ID for a new node to the respective follower cache group 106 that serves the client server 104.
In some cases or embodiments, the client server 104 may specify a particular fragment within which the new node should be stored (for example, to colocate the new node with another node). In such a case, the client server 104 requests a new node ID from the particular follower cache node 112 storing the specified fragment. Alternatively, the client server 104 can pass the node ID of an existing node along with the request for a new node ID to the follower cache node 112 storing the fragment that holds the passed node ID, to cause the follower cache node 112 to respond to the client server 104 with a node ID for the new node that is in the range of node IDs held by that fragment. In other cases or embodiments, the client server 104 may select (for example, randomly or based on some function) a particular follower cache node 112 or a particular fragment to which to send the new node ID request. In any case, the particular cache node 112, or more particularly the graph management software running on the follower cache node 112, then transmits the new node ID to the client server 104. The client server 104 can then formulate a write request that includes the new node ID and send it to the corresponding follower cache node 112. The write request can also specify a node type for the new node and include a payload (for example, content to be stored with the new node) and/or metadata (for example, the node ID of the user making the request, a timestamp indicating when the request was received by the client server 104, among other data) to be stored with the node ID. For example, the write request sent to the follower cache node 112 can be of the form object_add(node ID, node type, payload, metadata). Similarly, to update a node, the client server 104 can send a write request of the form object_modify(node ID, node type, payload, metadata) to the follower cache node 112 storing the fragment within which the node ID is stored. Similarly, to delete a node, the client server 104 can send a request object_delete(node ID) to the follower cache node 112 storing the fragment within which the node ID is stored. In particular embodiments, the follower cache node then transmits the request to the leader cache node 114 storing the fragment that holds the corresponding node ID, so that the leader cache node 114 can then update the fragment. The leader cache node 114 then translates the request into the language of database 110 and transmits the translated request to database 110, so that the database can be updated as well. Figure 4 illustrates an exemplary method for processing a request to add an association (assoc_add) between two nodes. As Figure 4 illustrates, when a follower cache node 112 receives an assoc_add request (for example, assoc_add(id1, type, id2, metadata)), it accesses an index to identify the association set object corresponding to id1 and type (402). The follower cache node 112 adds id2 to both the hash table and the circular buffer of the association set object and increments the count value of the association set object (404).
The association set object now maintains the new association of the given type between node id1 and node id2. To facilitate lookup of the association with respect to id2, the follower cache node 112 identifies the fragment ID corresponding to node identifier id2 and sends the assoc_add request to the follower cache node 112 in the group that handles the identified fragment (406). If the instant follower cache node 112 itself handles that fragment, it processes the assoc_add request locally. In one implementation, the dispatching follower cache node 112 can transmit a modified assoc_add request that signals that this is an update needed to establish a bidirectional association at the cache layer. The follower cache node 112 also sends the assoc_add request to the leader cache node 114 corresponding to the fragment on which id1 is located (408). The leader cache node 114 can perform a similar process to establish a bidirectional association in the leader cache group. The leader cache node 114 also causes the new association to be persisted in database 110. In this way, an association between node id1 and node id2 is now searchable in an index with reference to id1 and type and, separately, to id2 and type. In particular embodiments, the graph can maintain a variety of different node types, such as users, pages, events, wall posts, comments, photographs, videos, background information, concepts, interests and any other element that it is useful to represent as a node. Edge types correspond to associations between nodes and may include friends, followers, subscribers, fans, likes (or other indications of interest), wall posts, comments, links, suggestions, recommendations and other types of associations between nodes. In one implementation, a portion of the graph can be a social graph including user nodes, each of which corresponds to a respective user of the social network environment. The social graph can also include other nodes, such as concept nodes, each devoted or directed to a particular concept, as well as topic nodes, which may or may not be ephemeral, each devoted or directed to a particular topic of current interest among users of the social network environment. In particular embodiments, each node has, represents, or is represented by a corresponding web page (“profile page”) hosted by or accessible in the social network environment. For example, a user node can have a corresponding user profile page on which the corresponding user can add content, make declarations and otherwise express himself or herself. By way of example, as will be described below, various web pages hosted by or accessible in the social network environment, such as, for example, user profile pages, concept profile pages or topic profile pages, enable users to post content, post status updates, post messages, post comments (including comments on other posts submitted by the user or other users), declare interests, declare a “like” (described below) for any of the aforementioned posts, as well as for particular pages and content, or otherwise express themselves or perform various actions (hereinafter these and other user actions may be collectively referred to as “posts” or “user actions”).
In some embodiments, a post may include a link to, or otherwise reference, additional content, such as media content (for example, photos, videos, music, text, etc.), uniform resource locators (URLs) and other nodes, via the user's respective profile page, other users' profile pages, concept profile pages, topic pages or other web pages or web applications. Such posts, declarations or actions can then be made visible to the authoring user as well as to other users. In particular embodiments, the social graph further includes a plurality of edges that each define or represent a connection between a corresponding pair of nodes in the social graph. As discussed above, each content item can be a node in the graph connected to other nodes. As just described, in various exemplary embodiments, one or more of the described web pages or web applications are associated with a social network environment or social networking service. As used herein, a “user” can be an individual (human user), an entity (for example, a company, business or third-party application) or a group (for example, of individuals or entities) that interacts or communicates with or through such a social network environment. As used herein, a “registered user” refers to a user who has officially registered within the social network environment (generally, the users and user nodes described herein refer to registered users only, although this is not necessarily a requirement in other embodiments; that is, in other embodiments, the users and user nodes described herein can refer to users who have not registered with the social network environment described herein). In particular embodiments, each user has a corresponding “profile” page stored, hosted or accessible by the social network environment and visible to all or a selected subset of other users. In general, a user has administrative rights to all or a portion of his or her own respective profile page, as well as, potentially, to other pages created by or for the particular user, including, for example, home pages, pages hosting web applications, among other possibilities. As used herein, an “authenticated user” refers to a user who has been authenticated by the social network environment as being the user claimed in a corresponding profile page to which the user has administrative rights or, alternatively, a trusted representative of the claimed user. A connection between two users or concepts can represent a defined relationship between users or concepts of the social network environment and can be defined logically in a suitable data structure of the social network environment as an edge between the nodes corresponding to the users, concepts, events or other nodes of the social network environment for which the association was made. As used herein, a “friendship” represents an association, such as a defined social relationship, between a pair of users of the social network environment. A “friend”, as used herein, can refer to any user of the social network environment with whom another user has formed a connection, friendship, association or relationship, causing an edge to be generated between the two users. By way of example, two registered users can become friends with each other explicitly, such as, for example, by one of the two users selecting the other for friendship as a result of transmitting, or causing the transmission of, a friend request to the other user, who can then accept or deny the request.
Alternatively, friendships or other connections can be established automatically. Such a friendship may be visible to other users, especially those who are themselves friends with one or both of the registered users. A friend of a registered user may also have greater privileges to access content, especially content generated or declared by that user, on the registered user's profile or other pages. It should be noted, however, that two users who have a friendship connection established between them in the social graph may not necessarily be friends (in the conventional sense) in real life (outside the social networking environment). For example, in some implementations a user may be a business entity or another non-human entity, and thus unable to be a friend of a user in the traditional sense of the word. As used herein, a "fan" can refer to a user who is a supporter or follower of a particular user, web page, web application or other web content accessible in the social networking environment. In particular embodiments, when a user is a fan of a particular web page ("likes" the particular web page), the user can be listed on that page as a fan for other registered users or the general public to see. Additionally, an avatar or profile picture of the user can be shown on the page (or within or on any of the pages described below). As used herein, a "like" can refer to something such as, by way of example and not limitation, a post, a comment, an interest, a link, a piece of media (for example, a photo, photo album, video or song), a concept, an entity or a page, among other possibilities (in some implementations a user can declare a "like" to or for virtually anything on any page hosted by or accessible through the social networking system or environment) that a user, and particularly a registered or authenticated user, has declared or otherwise demonstrated that he or she likes, is a fan of, supports, enjoys or otherwise views positively. In one embodiment, indicating or declaring a "like" and indicating or declaring that the user is a "fan" of something can be processed and defined equivalently in the social networking environment and can be used interchangeably; similarly, declaring oneself a "fan" of something, such as a concept or a concept profile page, and declaring that one "likes" the thing, can be defined equivalently in the social networking environment and are used interchangeably herein. In addition, as used herein, an "interest" can refer to an interest declared by the user, such as an interest presented on the user's profile page. As used herein, a "wish" can refer to virtually anything that a user wants. As described above, a "concept" can refer to virtually anything in which a user can declare or otherwise demonstrate an interest, a liking or a relationship, such as, for example, a sport, a sports team, a genre of music, a musical composer, a hobby, a business (company), an entity, a group, a celebrity, a person who is not a registered user or even, in some embodiments, another user (for example, an unauthenticated user), an event, and so on. For example, there may be a concept node and a concept profile page for "Jerry Rice", the famous professional football player, created and managed by one or more of a plurality of users (for example, users other than Jerry Rice), while the social graph additionally includes a user node and a user profile page for Jerry Rice created by and administered by Jerry Rice himself (or by Jerry Rice's trusted or authorized representatives).
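To keep the node and edge taxonomy above concrete, the following is a small, purely illustrative data model for typed nodes and typed edges. The class names, fields and type lists are assumptions made for the sketch and are not part of the disclosure.

```python
# Illustrative data model; names and fields are hypothetical.
from dataclasses import dataclass, field
from time import time

NODE_TYPES = {"user", "page", "event", "post", "comment", "photo", "concept", "topic"}
EDGE_TYPES = {"friend", "follower", "subscriber", "fan", "like", "comment", "link"}

@dataclass
class Node:
    node_id: int
    node_type: str                              # one of NODE_TYPES
    data: dict = field(default_factory=dict)    # e.g. profile fields for a user node

@dataclass
class Edge:
    id1: int                                    # source node (e.g. the user who "likes")
    edge_type: str                              # one of EDGE_TYPES
    id2: int                                    # target node (e.g. the liked page)
    created_at: float = field(default_factory=time)

# A friendship is stored as a pair of edges so it can be indexed from both users.
alice, bob = Node(1, "user"), Node(2, "user")
friendship = [Edge(alice.node_id, "friend", bob.node_id),
              Edge(bob.node_id, "friend", alice.node_id)]
```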
Figure 5 illustrates a distributed redundant system. In the implementation shown, the distributed redundant system includes at least first and second data centers 102a, 102b. Each of the data centers 102a, 102b includes one or more follower cache groups 106 and a leader cache group 108a, 108b. In one implementation, leader cache group 108a acts as a primary (master) cache group, while leader cache group 108b is a secondary (slave) cache group. In one implementation, data centers 102a, 102b are redundant in the sense that synchronization functions are used to maintain duplicate copies of database 110. In one implementation, data center 102a may be physically located in one geographic region (such as the west coast of the United States) to serve traffic from that region, while data center 102b may be physically located in another geographic region (such as the east coast of the United States). Since users in either region can access the same data and associations, efficient synchronization mechanisms are desirable.

Figure 6 illustrates an exemplary method by which a leader cache node 114 processes write commands. As discussed above and with reference to figure 5, a follower cache node 112 can receive, from a client server 104, a write command to add or update an object or association (figure 5, No. 1). Follower cache node 112 sends the write command to a corresponding leader cache node 114 (figure 5, No. 2). When leader cache node 114 receives a write command from a follower cache node (602), it processes the write command to update one or more entries in the cache maintained by leader cache group 108a (604) and writes the update to persistent database 110a (606) (figure 5, No. 3). Leader cache node 114 also acknowledges (ACKs) the write command to follower cache node 112 and broadcasts the update to the other follower cache groups 106 of data center 102a (figure 5, No. 4a) and to the secondary leader cache group 108b, which sends the update to its own follower cache groups 106 (figure 5, No. 4b) (608). As figure 6 illustrates, leader cache node 114 also adds the update to a replication log (610). Databases 110a, 110b implement a synchronization mechanism, such as MySQL replication, to synchronize the persistent databases.

Figure 7 illustrates a message flow according to an implementation of the invention. When a write command is received at a follower cache node 112 in a follower cache group 106 that is not directly associated with the primary leader cache group 108a (figure 7, No. 1), follower cache node 112 sends the write message to the primary leader cache group 108a for processing (figure 7, No. 2). A leader cache node 114 in the primary leader cache group 108a can then broadcast the update to its follower cache groups 106 (figure 7, No. 3) and write the changes to database 110a. As figure 7 shows, the follower cache node 112 that received the write command can also send the write command to its secondary leader cache group 108b (figure 7, No. 5), which broadcasts the update to the other follower cache groups 106 (figure 7, No. 5). The preceding architecture therefore allows changes to cached data to be quickly replicated across data centers, while the separate replication between databases 110a, 110b safeguards the persisted data.
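A rough sketch of the leader write path of figures 6 and 7 is given below, assuming hypothetical class, method and parameter names (LeaderCacheNode, handle_write, apply_update, a database adapter with a put method); it is not the actual implementation.

```python
# Illustrative sketch of a leader cache node's write path; all names are assumed.
class LeaderCacheNode:
    def __init__(self, cache, database, follower_groups, secondary_leader=None):
        self.cache = cache                      # dict-like in-memory store
        self.database = database                # persistent store adapter (e.g. in front of MySQL)
        self.follower_groups = follower_groups  # follower cache groups in this data center
        self.secondary_leader = secondary_leader
        self.replication_log = []

    def handle_write(self, key, value):
        # (1) Apply the update to the leader cache group.
        self.cache[key] = value
        # (2) Write the update through to the persistent database.
        self.database.put(key, value)
        # (3) Broadcast the update to the follower cache groups in this data center.
        for group in self.follower_groups:
            group.apply_update(key, value)
        # (4) Forward the update to the secondary (slave) leader, which in turn
        #     updates its own follower cache groups.
        if self.secondary_leader is not None:
            self.secondary_leader.apply_update(key, value)
        # (5) Record the change in a replication log.
        self.replication_log.append((key, value))
        return "ACK"
```

Note that database-level synchronization (such as MySQL replication) runs independently of this in-memory broadcast, which is why the sketch only appends to a local replication log rather than replicating the database write itself.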
The applications or processes described herein can be implemented as a series of computer-readable instructions, embodied or encoded in a tangible data storage medium, that when executed are operable to cause one or more processors to implement the operations described above. While the foregoing processes and mechanisms can be implemented by a wide variety of physical systems and in a wide variety of network and computing environments, the computing systems described below provide exemplary computing system architectures for the server and client systems described above, for purposes of teaching rather than limitation.

Figure 2 illustrates an exemplary computing system architecture, which can be used to implement a server 22a, 22b. In one embodiment, hardware system 1000 comprises a processor 1002, a cache memory 1004 and one or more executable modules and drivers, stored on a tangible computer-readable medium, directed to the functions described herein. In addition, hardware system 1000 includes a high-performance input/output (I/O) bus 1006 and a standard I/O bus 1008. A host bridge 1010 couples processor 1002 to the high-performance I/O bus 1006, while an I/O bus bridge 1012 couples the two buses 1006 and 1008 to each other. System memory 1014 and one or more network/communication interfaces 1016 couple to bus 1006. Hardware system 1000 may further include video memory (not shown) and a display device coupled to the video memory. Mass storage 1018 and I/O ports 1020 couple to bus 1008. Hardware system 1000 can optionally include a keyboard, a pointing device and a display device (not shown) coupled to bus 1008. Collectively, these elements are intended to represent a broad category of computer hardware systems, including, but not limited to, general-purpose computer systems based on x86-compatible processors manufactured by Intel Corporation of Santa Clara, California, and x86-compatible processors manufactured by Advanced Micro Devices (AMD), Inc., of Sunnyvale, California, as well as any other suitable processor.

The elements of hardware system 1000 are described in greater detail below. In particular, network interface 1016 provides communication between hardware system 1000 and any of a wide range of networks, such as an Ethernet network (for example, IEEE 802.3), a bus, etc. Mass storage 1018 provides permanent storage for the data and programming instructions that perform the functions described above as implemented on servers 22a, 22b, while system memory 1014 (for example, DRAM) provides temporary storage for the data and programming instructions when executed by processor 1002. I/O ports 1020 are one or more serial and/or parallel communication ports that provide communication between additional peripheral devices, which can be coupled to hardware system 1000.

Hardware system 1000 can include a variety of system architectures, and various components of hardware system 1000 can be rearranged. For example, cache 1004 can be on-chip with processor 1002. Alternatively, cache 1004 and processor 1002 can be packaged together as a "processor module", with processor 1002 being referred to as the "processor core". In addition, certain embodiments of the present invention may not require or include all of the above components. For example, the peripheral devices shown coupled to the standard I/O bus 1008 can be coupled to the high-performance I/O bus 1006.
In addition, in some embodiments only a single bus may exist, with the components of hardware system 1000 being coupled to that single bus. Furthermore, hardware system 1000 may include additional components, such as additional processors, storage devices or memories.

In one implementation, the operations of the embodiments described herein are implemented as a series of executable modules run by hardware system 1000, individually or collectively in a distributed computing environment. In a particular embodiment, a set of software modules and/or drivers implements a network communications protocol stack, browsing and other computing functions, optimization processes and so on. The foregoing functional modules can be realized by hardware, by executable modules stored on a computer-readable medium, or by a combination of both. For example, the functional modules can comprise a plurality or series of instructions to be executed by a processor in a hardware system, such as processor 1002. Initially, the series of instructions can be stored on a storage device, such as mass storage 1018. However, the series of instructions can be tangibly stored on any suitable storage medium, such as a diskette, CD-ROM, ROM, EEPROM, etc. Moreover, the series of instructions need not be stored locally and could be received from a remote storage device, such as a server on a network, via network/communications interface 1016. The instructions are copied from the storage device, such as mass storage 1018, into memory 1014 and then accessed and executed by processor 1002.

An operating system manages and controls the operation of hardware system 1000, including the input and output of data to and from software applications (not shown). The operating system provides an interface between the software applications being executed on the system and the hardware components of the system. Any suitable operating system can be used, such as the LINUX operating system, the Apple Macintosh operating system available from Apple Computer Inc. of Cupertino, California, UNIX operating systems, Microsoft® Windows® operating systems, BSD operating systems and the like. Of course, other implementations are possible. For example, the functions described herein can be implemented in firmware or in an application-specific integrated circuit.

In addition, the elements and operations described above can be comprised of instructions that are stored on storage media. The instructions can be retrieved and executed by a processing system. Some examples of instructions are software, program code and firmware. Some examples of storage media are memory devices, tape, disks, integrated circuits and servers. The instructions are operational when executed by the processing system to direct the processing system to operate in accordance with the invention. The term "processing system" refers to a single processing device or a group of interoperating processing devices. Some examples of processing devices are integrated circuits and logic circuitry. Those skilled in the art are familiar with instructions, computers and storage media. The present disclosure encompasses all changes, substitutions, variations, alterations and modifications to the exemplary embodiments herein that a person having ordinary skill in the art would comprehend.
Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations and modifications to the exemplary embodiments herein that a person having ordinary skill in the art would comprehend. By way of example, although embodiments of the present invention have been described as operating in conjunction with a social networking site, the present invention can be used in conjunction with any communications facility that supports network applications and models its data as a graph of associations. In addition, in some embodiments the terms "network service" and "network site" can be used interchangeably and, additionally, can refer to a custom or generalized API on a device, such as a mobile device (for example, a cell phone, smartphone, personal GPS, personal digital assistant or personal gaming device), that makes API calls directly to a server.
Claims (10)

1. Method, CHARACTERIZED by comprising: receiving, at a first cache node, a request to add an association between a first node and a second node, wherein the first node is identified by a first identifier in a first range of identifiers allocated to the first cache node of a group (cluster) and wherein the second node is identified by a second identifier in a second range of identifiers allocated to a second cache node of the group; storing data indicating the association between the first and second nodes in a memory of the first cache node; and transmitting a message to the second cache node, wherein the message is operative to cause the second cache node to add the association between the first and second nodes in a memory of the second cache node.

2. Method according to claim 1, CHARACTERIZED by the fact that it further comprises identifying a fragment identifier corresponding to the second identifier, and wherein the second cache node is associated with the identified fragment identifier.

3. Method according to claim 1, CHARACTERIZED by the fact that the message signals to the second cache node that the message contains an update needed to establish, at the cache layer, a bidirectional association between the first node and the second node.

4. Method according to claim 1, CHARACTERIZED by the fact that it further comprises sending the message to a leader cache node corresponding to a fragment identifier associated with the first identifier.

5. Method according to claim 1, CHARACTERIZED by the fact that the first and second nodes are contained in a graph structure comprising one or a plurality of node types.

6. Method according to claim 1, CHARACTERIZED by the fact that the association is of an association type identified from a plurality of association types.

7. Method according to claim 1, CHARACTERIZED by the fact that the request identifies an association type and wherein the method further comprises: maintaining in memory, for each association set corresponding to a first node of a plurality of nodes and an association type of a plurality of association types, a first index and a second index; wherein the first index comprises an ordered array of entries, each entry including a second node identifier that is associated with the first node and a sort attribute; wherein the second index comprises a hash table comprising entries corresponding to the node identifiers of the respective second nodes that are associated with the first node; and accessing the memory against the first association type and the first node identifier to add the second node identifier to a first index and a second index corresponding to the first association type and the first node identifier.

8. Cache node operative in a group of cache nodes, CHARACTERIZED by comprising: one or more processors; a memory; and a non-transitory storage medium storing computer-readable instructions, the instructions, when executed, operative to cause the one or more processors to: receive a request to add an association between a first node and a second node, wherein the first node is identified by a first identifier in a first range of identifiers allocated to the cache node of a group and wherein the second node is identified by a second identifier in a second range of identifiers allocated to a second cache node of the group; store data indicating the association between the first and second nodes in a memory of the cache node; and transmit a message to the second cache node, wherein the message is operative to cause the second cache node to add the association between the first and second nodes in a memory of the second cache node.

9. Cache node according to claim 8, CHARACTERIZED by the fact that the instructions are further operative to cause the one or more processors to identify a fragment identifier corresponding to the second identifier, and wherein the second cache node is associated with the identified fragment identifier.

10. Non-transitory storage medium, CHARACTERIZED by storing computer-readable instructions, the instructions, when executed, operative to cause one or more processors to: receive, at a first cache node, a request to add an association between a first node and a second node, wherein the first node is identified by a first identifier in a first range of identifiers allocated to the first cache node of a group and wherein the second node is identified by a second identifier in a second range of identifiers allocated to a second cache node of the group; store data indicating the association between the first and second nodes in a memory of the first cache node; and transmit a message to the second cache node, wherein the message is operative to cause the second cache node to add the association between the first and second nodes in a memory of the second cache node.
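For illustration only, and not as part of the claims, the two per-association-set indexes recited in claim 7 (an ordered array of entries keyed by a sort attribute, plus a hash table keyed by the associated node identifiers) could be sketched as follows. The class and method names and the choice of sort key are assumptions made for the sketch.

```python
# Illustrative sketch of the two-index association set of claim 7; names are assumed.
import bisect

class AssociationSet:
    def __init__(self):
        self.ordered = []   # first index: entries kept sorted by a sort attribute
        self.members = {}   # second index: hash table keyed by the second node identifier

    def add(self, node_id2, sort_key):
        """Add an associated node identifier with its sort attribute."""
        if node_id2 in self.members:
            return
        bisect.insort(self.ordered, (sort_key, node_id2))
        self.members[node_id2] = sort_key

    def contains(self, node_id2):
        # Constant-time membership test via the hash-based index.
        return node_id2 in self.members

    def newest(self, count):
        # Range queries (e.g. most recent associations) use the ordered index.
        return [nid for _, nid in self.ordered[-count:]][::-1]

# The cache would key such a set by (id1, association type), so adding the
# association of claim 1 updates the AssociationSet stored under (id1, type).
```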